
    Thickening of galactic disks through clustered star formation

    (Abridged) The building blocks of galaxies are star clusters. These form with low star-formation efficiencies and consequently lose a large part of their stars, which expand outwards once the residual gas is expelled by the action of the massive stars. Massive star clusters may thus add kinematically hot components to galactic field populations. This kinematical imprint on the stellar distribution function is estimated here by calculating the velocity distribution function for ensembles of star clusters distributed as power-law or log-normal initial cluster mass functions (ICMFs). The resulting stellar velocity distribution function is non-Gaussian and may be interpreted as being composed of multiple kinematical sub-populations. The notion that the formation of star clusters may add hot kinematical components to a galaxy is applied to the age–velocity-dispersion relation of the Milky Way disk to study the implied history of clustered star formation, with an emphasis on the possible origin of the thick disk. Comment: MNRAS, accepted, 27 pages, 9 figures
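    A minimal numerical sketch of the kind of calculation described, not the paper's code: cluster masses are drawn from a power-law ICMF, each cluster's escaping stars get a Gaussian velocity distribution whose dispersion scales with cluster mass (the sigma ~ M^(1/2) scaling and all numerical values below are assumptions for illustration), and the mass-weighted sum is tested for non-Gaussianity.

```python
import numpy as np

# Sketch only: the composite pdf is a Gaussian scale mixture and hence
# non-Gaussian (positive excess kurtosis), as the abstract describes.
rng = np.random.default_rng(0)

def sample_powerlaw(m_lo, m_hi, beta, n):
    """Inverse-transform sampling of dN/dM proportional to M**(-beta)."""
    u = rng.uniform(size=n)
    a = 1.0 - beta
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

masses = sample_powerlaw(1e3, 1e6, beta=2.0, n=5000)  # cluster masses [Msun]
sigmas = 2.0 * (masses / 1e4) ** 0.5                  # dispersions [km/s], assumed scaling

v = np.linspace(-100.0, 100.0, 4001)                  # velocity grid [km/s]
dv = v[1] - v[0]
f = np.zeros_like(v)
for m, s in zip(masses, sigmas):                      # mass-weighted Gaussians
    f += m * np.exp(-0.5 * (v / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
f /= f.sum() * dv                                     # normalise to a pdf

m2 = (v**2 * f).sum() * dv
m4 = (v**4 * f).sum() * dv
print(f"excess kurtosis = {m4 / m2**2 - 3.0:.2f}")    # > 0: non-Gaussian
```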

    Bayesian history matching of complex infectious disease models using emulation: A tutorial and a case study on HIV in Uganda

    Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real-world data is greatly hindered both by large numbers of input and output parameters and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22-input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was substantially smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs.
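    As a sketch of a single history-matching wave, the toy below uses a one-input stand-in simulator, a Gaussian process emulator (scikit-learn here; the paper's emulator construction may differ) and the conventional implausibility cutoff of 3. The target value and variance terms are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    """Stand-in for an expensive simulator (toy: one input, one output)."""
    return np.sin(3.0 * x) + 0.5 * x

z, var_obs, var_disc = 0.8, 0.01**2, 0.05**2  # target datum and variances (invented)

# Wave 1: fit an emulator on a handful of simulator runs.
X = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y = simulator(X).ravel()
gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(0.5),
                              normalize_y=True).fit(X, y)

# Implausibility I(x) = |z - E[f(x)]| / sqrt(emulator var + obs var + disc var).
Xc = np.linspace(0.0, 2.0, 500).reshape(-1, 1)
mean, sd = gp.predict(Xc, return_std=True)
impl = np.abs(z - mean) / np.sqrt(sd**2 + var_obs + var_disc)

non_implausible = Xc[impl < 3.0]  # 3-sigma cutoff (Pukelsheim's rule)
print(f"non-implausible fraction: {len(non_implausible) / len(Xc):.2f}")
# Later waves rerun the simulator inside this region and refit the emulator.
```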

    Kalman tracking of linear predictor and harmonic noise models for noisy speech enhancement

    This paper presents a speech enhancement method based on the tracking and denoising of the formants of a linear prediction (LP) model of the spectral envelope of speech and the parameters of a harmonic noise model (HNM) of its excitation. The main advantages of tracking and denoising the prominent energy contours of speech are the efficient use of the spectral and temporal structures of successive speech frames and a mitigation of the processing artefact known as 'musical noise' or 'musical tones'. The formant-tracking linear prediction (FTLP) model estimation consists of three stages: (a) speech pre-cleaning based on spectral amplitude estimation, (b) formant tracking across successive speech frames using the Viterbi method, and (c) Kalman filtering of the formant trajectories across successive speech frames. The HNM parameters for the excitation signal comprise: the voiced/unvoiced decision, the fundamental frequency, the harmonics' amplitudes and the variance of the noise component of excitation. A frequency-domain pitch extraction method is proposed that searches for the peak signal-to-noise ratios (SNRs) at the harmonics. For each speech frame several pitch candidates are calculated. An estimate of the pitch trajectory across successive frames is obtained using a Viterbi decoder. The trajectories of the noisy excitation harmonics across successive speech frames are modelled and denoised using Kalman filters. The proposed method is used to deconstruct noisy speech, denoise its model parameters and then reconstitute speech from its cleaned parts. Experimental evaluations show the performance gains of the formant tracking, pitch extraction and noise reduction stages.
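    The trajectory-denoising step can be illustrated with a scalar Kalman filter. The sketch below assumes a random-walk state model for a single per-frame frequency track (one formant, or the pitch); the state model and noise variances are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kalman_track(obs, q=25.0, r=400.0):
    """Denoise a per-frame frequency track [Hz] with a scalar Kalman filter.

    q: process (random-walk) variance; r: measurement variance (both assumed).
    """
    x, p = obs[0], r                   # initial state estimate and variance
    out = np.empty_like(obs)
    out[0] = x
    for t in range(1, len(obs)):
        p = p + q                      # predict: x_t = x_{t-1} + w,  w ~ N(0, q)
        k = p / (p + r)                # Kalman gain
        x = x + k * (obs[t] - x)       # update with the frame-t measurement
        p = (1.0 - k) * p
        out[t] = x
    return out

rng = np.random.default_rng(1)
true_f1 = 500.0 + 80.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 200))  # synthetic F1 [Hz]
noisy = true_f1 + rng.normal(0.0, 20.0, size=true_f1.shape)
denoised = kalman_track(noisy)
print(f"RMSE noisy    = {np.sqrt(np.mean((noisy - true_f1) ** 2)):.1f} Hz")
print(f"RMSE denoised = {np.sqrt(np.mean((denoised - true_f1) ** 2)):.1f} Hz")
```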

    Learning of model discrepancy for structural dynamics applications using Bayesian history matching

    Calibration of computer models for structural dynamics is often an important task in creating valid predictions that match observational data. However, calibration alone will lead to biased estimates of system parameters when a mechanism for model discrepancy is not included. Model discrepancy is defined as the mismatch between observational data and the model when the 'true' parameters are known; it occurs due to the absence and/or simplification of certain physics in the computer model. Bayesian History Matching (BHM) is a 'likelihood-free' method for obtaining calibrated outputs whilst accounting for model discrepancies, typically via an additional variance term. The approach assesses the input space, using an emulator of the complex computer model, and identifies parameter sets that could have plausibly generated the target outputs. In this paper, a more informative methodology is outlined in which the functional form of the model discrepancy is inferred, improving predictive performance. The algorithm is applied to a case study of a representative five-storey building structure, with the objective of calibrating outputs of a finite element (FE) model. The results are discussed with appropriate validation metrics that consider the complete distribution.
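    The difference from a constant-variance discrepancy term can be made concrete. In the sketch below (notation and numbers assumed, not taken from the paper), a learned discrepancy mean corrects the emulator prediction inside the implausibility measure, so a systematically biased emulator no longer rejects the plausible parameter region.

```python
import numpy as np

def implausibility(z, em_mean, em_var, disc_mean, disc_var, obs_var):
    """|target - (emulator + discrepancy)| scaled by all uncertainty sources."""
    return np.abs(z - (em_mean + disc_mean)) / np.sqrt(em_var + disc_var + obs_var)

# Toy numbers: an emulator with a systematic bias (mean 1.2 against a target
# of 1.0) is rejected when the discrepancy is ignored, but accepted once a
# learned discrepancy mean of -0.18 (with its own variance) absorbs the bias.
z = 1.0
print(implausibility(z, 1.2, 0.02**2, 0.0, 0.0, 0.01**2))        # ~8.9: implausible
print(implausibility(z, 1.2, 0.02**2, -0.18, 0.03**2, 0.01**2))  # ~0.5: plausible
```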

    Nitrosative and Oxidative Stresses Contribute to Post-Ischemic Liver Injury Following Severe Hemorrhagic Shock: The Role of Hypoxemic Resuscitation

    Purpose: Hemorrhagic shock and resuscitation is frequently associated with liver ischemia-reperfusion injury. The aim of the study was to investigate whether hypoxemic resuscitation attenuates liver injury. Methods: Anesthetized, mechanically ventilated New Zealand white rabbits were exsanguinated to a mean arterial pressure of 30 mmHg for 60 minutes. Resuscitation under normoxemia (Normox-Res group, n = 16, PaO2 = 95–105 mmHg) or hypoxemia (Hypox-Res group, n = 15, PaO2 = 35–40 mmHg) followed, modifying the FiO2. Animals not subjected to shock constituted the sham group (n = 11, PaO2 = 95–105 mmHg). Indices of the inflammatory, oxidative and nitrosative response were measured, and histopathological and immunohistochemical studies of the liver were performed. Results: Normox-Res group animals exhibited increased serum alanine aminotransferase, tumor necrosis factor-alpha, interleukin (IL)-1β and IL-6 levels compared with the Hypox-Res and sham groups. Reactive oxygen species generation, malondialdehyde formation and myeloperoxidase activity were all elevated in Normox-Res rabbits compared with the Hypox-Res and sham groups. Similarly, endothelial NO synthase and inducible NO synthase mRNA expression was up-regulated and nitrotyrosine immunostaining increased in animals resuscitated normoxemically, indicating a more intense nitrosative stress. Hypox-Res animals demonstrated less prominent histopathologic injury, similar to that of sham animals. Conclusions: Hypoxemic resuscitation prevents liver reperfusion injury through attenuation of the inflammatory response.

    AutoEPG: software for the analysis of electrical activity in the microcircuit underpinning feeding behaviour of Caenorhabditis elegans

    Background: The pharyngeal microcircuit of the nematode Caenorhabditis elegans serves as a model for analysing neural network activity and is amenable to electrophysiological recording techniques. One such technique is the electropharyngeogram (EPG), which has provided insight into the genetic basis of feeding behaviour, neurotransmission and muscle excitability. However, the detailed manual analysis of the digital recordings necessary to identify subtle differences in activity that reflect modulatory changes within the underlying network is time consuming and low throughput. To address this we have developed an automated system for the high-throughput and discrete analysis of EPG recordings (AutoEPG).

    Methodology/Principal Findings: AutoEPG employs a tailor-made signal processing algorithm that automatically detects different features of the EPG signal, including those that report on the relaxation and contraction of the muscle and on neuronal activity. Manual verification of the detection algorithm has demonstrated that AutoEPG is capable of very high levels of accuracy. We have further validated the software by analysing existing mutant strains with known pharyngeal phenotypes detectable by the EPG. In doing so, we have more precisely defined an evolutionarily conserved role for the calcium-dependent potassium channel, SLO-1, in modulating the rhythmic activity of neural networks.

    Conclusions/Significance: AutoEPG enables the consistent analysis of EPG recordings, significantly increases analysis throughput and allows the robust identification of subtle changes in the electrical activity of the pharyngeal nervous system. It is anticipated that AutoEPG will further add to the experimental tractability of the C. elegans pharynx as a model neural circuit.
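    As an illustration of the kind of feature detection involved (not AutoEPG's actual algorithm), a thresholded peak search can recover the timing of the EPG's large positive (E, contraction) and negative (R, relaxation) transients from a synthetic trace; all parameters below are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_pumps(epg, fs, min_interval_s=0.15, thresh_sd=3.0):
    """Return sample indices of putative E (positive) and R (negative) spikes."""
    thresh = thresh_sd * np.std(epg)
    dist = int(min_interval_s * fs)               # refractory gap between pumps
    e_idx, _ = find_peaks(epg, height=thresh, distance=dist)
    r_idx, _ = find_peaks(-epg, height=thresh, distance=dist)
    return e_idx, r_idx

# Synthetic 2 s trace at 1 kHz with four biphasic pump-like events.
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
epg = 0.05 * np.random.default_rng(0).normal(size=t.size)   # baseline noise
for t0 in (0.3, 0.8, 1.3, 1.8):
    i = int(t0 * fs)
    epg[i] += 1.0          # E spike: muscle contraction
    epg[i + 120] -= 0.8    # R spike: relaxation, 120 ms later
e_idx, r_idx = detect_pumps(epg, fs)
print(len(e_idx), "E spikes,", len(r_idx), "R spikes")       # expect 4 and 4
```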

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres.

    Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low-middle-income countries.

    Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of 'single-use' consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low-middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia.

    Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high- and low-middle-income countries.

    The effect of the nugget on Gaussian process emulators of computer models

    The effect of a Gaussian process parameter known as the nugget on the development of computer model emulators is investigated. The presence of the nugget results in an emulator that does not interpolate the data and attaches a non-zero uncertainty bound around them. The limits of this approximation are investigated theoretically, and it is shown that they can be as large as those of a least squares model with the same regression functions as the emulator, regardless of the nugget's value. The likelihood of the correlation function parameters is also studied and two mode types are identified. Type I modes are characterised by an approximation error that is a function of the nugget and can therefore become arbitrarily small, effectively yielding an interpolating emulator. Type II modes result in emulators with a constant approximation error. Apart from the theoretical investigation of the limits of the approximation error, a practical method for automatically imposing restrictions on its extent is introduced. This is achieved by means of a penalty term that is added to the likelihood function and controls the amount of unexplainable variability in the computer model. The main findings are illustrated on data from an Energy Balance climate model.
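    The non-interpolation effect is easy to demonstrate. In the sketch below, scikit-learn's `alpha` parameter plays the role of the nugget (an assumption of convenience; the paper treats the nugget within the correlation function itself): with a near-zero nugget the emulator reproduces the training runs exactly, while a larger nugget leaves residuals and non-zero predictive uncertainty at them.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)   # six training runs of a toy model
y = np.sin(4.0 * X).ravel()

for nugget in (1e-10, 1e-2):
    # scikit-learn's `alpha` adds the nugget to the covariance diagonal.
    gp = GaussianProcessRegressor(RBF(0.3), alpha=nugget).fit(X, y)
    mean, sd = gp.predict(X, return_std=True)
    print(f"nugget={nugget:.0e}: max |residual| at training points = "
          f"{np.abs(mean - y).max():.1e}, max predictive sd = {sd.max():.1e}")
# A near-zero nugget interpolates the runs; nugget = 1e-2 leaves residuals
# and non-zero uncertainty bounds at the training points.
```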

    Concentration-dependent effects of acute and chronic ethanol on C. elegans micro-circuits and behaviour

    The concentration dependence of the acute and chronic effects of alcohol represents a complex interacting response involving the modification of gene expression, proteins and membranes. However, the concerted manner in which these interactions, with overlapping concentration dependence, co-ordinate neuro-adaptive behaviours is poorly understood. We are utilising the nervous system of C. elegans, in particular the functional investigation of defined micro-circuits, to gain insight into the neural substrates of alcohol dependence. In doing so we are applying both electrophysiological and behavioural assays to wild-type and mutant animals, in combination with novel semi-automated analysis techniques. Electropharyngeogram (EPG) recording from a semi-intact preparation of the C. elegans pharynx provides an opportunity to probe the acute effects of ethanol upon the activity of a simple, anatomically defined micro-circuit. We have identified that the network activity of the pharyngeal nervous system is modulated by ethanol in a concentration-dependent manner. Excitation of the pharyngeal network was recorded in the presence of ethanol concentrations equivalent to mammalian intoxication levels (1–20 mM), and inhibition at supra-intoxicating concentrations (>100 mM), allowing us to determine effectors of acute ethanol across a range of concentrations. In addition, we can observe a withdrawal response in animals that have undergone chronic pre-exposure to ethanol, and withdrawal relief at acute low doses (10 mM) of ethanol. We are currently probing the specificity of the pharyngeal response to ethanol using alcohols of increasing carbon chain length. In combination with the electrophysiological analysis, we have developed a behavioural assay that enables us to probe the basis of the neuro-adaptation that occurs upon chronic ethanol exposure at the level of the intact nervous system. We can demonstrate withdrawal and relief from withdrawal using this assay, and we are currently investigating the temporal dynamics and concentration dependence of this neural plasticity. We intend to apply the automated behavioural analysis technique we have developed to quantify subtle locomotory responses at low doses of ethanol, which have been shown to alter pharyngeal activity.